Identifying Gender Differences in Multimodal Emotion Recognition Using Bimodal Deep AutoEncoder

Authors

  • Xue Yan
  • Wei-Long Zheng
  • Wei Liu
  • Bao-Liang Lu
Abstract

This paper investigates differences between males and females in emotion recognition from electroencephalography (EEG) and eye movement data. Four basic emotions are considered: happy, sad, fearful, and neutral. The Bimodal Deep AutoEncoder (BDAE) and a fuzzy-integral-based method are applied to fuse the EEG and eye movement data. Our experimental results indicate that gender differences do exist in the neural patterns underlying emotion recognition; that eye movement data is less informative than EEG data for examining these differences; and that brain activation in females is generally lower than in males across most frequency bands and brain areas, especially for fearful emotions. From the confusion matrices, we observe that the fearful emotion is more diverse among women than among men, while men respond more diversely to the sad emotion than women do. Additionally, for females, individual differences in fear are more pronounced than for the other three emotions.
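The abstract only names the fusion method, so the sketch below is a minimal illustration, assuming PyTorch, of how a bimodal autoencoder can fuse EEG and eye-movement feature vectors through a shared layer. All dimensions (EEG_DIM, EYE_DIM, SHARED_DIM) and layer sizes are hypothetical placeholders, not the paper's configuration; the original BDAE also pretrains its layers as restricted Boltzmann machines, which this plain feed-forward version omits.

```python
# Minimal sketch of a bimodal autoencoder for EEG + eye-movement fusion.
# Assumes PyTorch; all dimensions are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

EEG_DIM, EYE_DIM, SHARED_DIM = 310, 33, 100  # hypothetical feature sizes

class BimodalAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Modality-specific encoders map each input to a hidden code.
        self.enc_eeg = nn.Sequential(nn.Linear(EEG_DIM, 128), nn.ReLU())
        self.enc_eye = nn.Sequential(nn.Linear(EYE_DIM, 32), nn.ReLU())
        # A shared layer fuses both hidden codes into one representation.
        self.shared = nn.Linear(128 + 32, SHARED_DIM)
        # Decoders reconstruct each modality from the shared code.
        self.dec_eeg = nn.Linear(SHARED_DIM, EEG_DIM)
        self.dec_eye = nn.Linear(SHARED_DIM, EYE_DIM)

    def forward(self, eeg, eye):
        h = torch.cat([self.enc_eeg(eeg), self.enc_eye(eye)], dim=1)
        z = torch.relu(self.shared(h))            # fused representation
        return z, self.dec_eeg(z), self.dec_eye(z)

model = BimodalAutoEncoder()
eeg = torch.randn(16, EEG_DIM)    # a batch of EEG feature vectors
eye = torch.randn(16, EYE_DIM)    # matching eye-movement features
z, eeg_hat, eye_hat = model(eeg, eye)
# Train by minimizing reconstruction error for both modalities;
# the fused code z is then fed to an emotion classifier.
loss = F.mse_loss(eeg_hat, eeg) + F.mse_loss(eye_hat, eye)
```

In a pipeline like this, the fused code z is what the emotion classifier consumes; the fuzzy-integral-based method mentioned in the abstract instead fuses the two modalities at the decision level and is not sketched here.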


Similar articles

Multimodal Emotion Recognition Using Deep Neural Networks

The change of emotions is a temporally dependent process. In this paper, a Bimodal-LSTM model is introduced to take temporal information into account for emotion recognition with multimodal signals. We extend the implementation of denoising autoencoders and adopt the Bimodal Deep Denoising AutoEncoder model. Both models are evaluated on a public dataset, SEED, using EEG features and eye movement ...
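As a rough sketch of the Bimodal-LSTM idea (assuming PyTorch; all sizes below are hypothetical placeholders), one recurrent model can run over the concatenated per-timestep EEG and eye-movement features and classify from the last hidden state:

```python
# Minimal sketch of a Bimodal-LSTM: one LSTM over the concatenated
# EEG + eye-movement feature sequences. Assumes PyTorch; all
# dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

class BimodalLSTM(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=33, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(eeg_dim + eye_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, eeg_seq, eye_seq):
        # eeg_seq: (batch, time, eeg_dim); eye_seq: (batch, time, eye_dim)
        x = torch.cat([eeg_seq, eye_seq], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

# Example: a batch of 8 sequences, 20 time steps each.
logits = BimodalLSTM()(torch.randn(8, 20, 310), torch.randn(8, 20, 33))
```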


Multimodal Emotion Recognition Using Multimodal Deep Learning

To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models from multiple physiological signals. For the unimodal enhancement task, we show that the best recognition accuracy of 82.11% on the SEED dataset is achieved with shared representations generated by...


Emotion Recognition Using Multimodal Deep Learning

To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models with the SEED and DEAP datasets to recognize different kinds of emotions. We demonstrate that high-level representation features extracted by the Bimodal Deep AutoEncoder (BDAE) are effective for e...


Adaptation Aftereffects in Vocal Emotion Perception Elicited by Expressive Faces and Voices

The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalization, when adaptors were of t...


Speech Emotion Recognition Using Scalogram Based Deep Structure

Speech Emotion Recognition (SER) is an important part of speech-based Human-Computer Interface (HCI) applications. Previous SER methods rely on extracting features and training an appropriate classifier. However, most of those features can be affected by emotionally irrelevant factors such as gender, speaking style, and environment. Here, an SER method has been proposed based on a concat...
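For reference, a scalogram is the magnitude of a continuous wavelet transform of the signal; a minimal sketch (assuming PyWavelets, with a synthetic tone standing in for a speech frame) looks like this:

```python
# Minimal sketch: turn a signal into a scalogram image suitable as
# input to a deep classifier. Assumes PyWavelets; the sine tone is a
# stand-in for an actual speech frame.
import numpy as np
import pywt

fs = 16000                                   # hypothetical sampling rate (Hz)
t = np.linspace(0, 1, fs, endpoint=False)
signal = np.sin(2 * np.pi * 220 * t)         # synthetic 220 Hz tone

scales = np.arange(1, 128)                   # larger scale ~ lower frequency
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coeffs)                   # (scales, time) magnitude image
```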



Journal title:

Volume:   Issue:

Pages:   -

Publication date: 2017